Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods
Authors
Abstract
• Let A_k denote the set of methods that observe a sequence of data pairs Y^t = (F(θ^t, X^t), F(τ^t, X^t)), 1 ≤ t ≤ k, and return an estimate θ̂(k) ∈ Θ.
• Let F_G denote the class of functions we want to optimize, where for each (F, P) ∈ F_G the subgradient g(θ; X) satisfies E_P[‖g(θ; X)‖²_*] ≤ G².
• For each A ∈ A_k and (F, P) ∈ F_G, consider the optimization gap
  ε_k(A, F, P, Θ) := f(θ̂(k)) − inf_{θ∈Θ} f(θ) = E_P[F(θ̂(k); X)] − inf_{θ∈Θ} E_P[F(θ; X)].
• Define the minimax error of zero-order optimization:
  ε_k(F_G, Θ) := inf_{A∈A_k} sup_{(F,P)∈F_G} E[ε_k(A, F, P, Θ)].
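The paired observations Y^t = (F(θ^t, X^t), F(τ^t, X^t)) correspond to a two-point zero-order scheme: the method queries the function at two nearby points and forms a directional finite-difference estimate of the gradient. A minimal sketch of such a method on a toy quadratic objective (my own illustration; the function names, step sizes, and the test objective are assumptions, not the paper's algorithm):

```python
import numpy as np

def two_point_gradient(F, theta, x, delta, rng):
    # Draw a random direction z and query F at theta and at tau = theta + delta*z;
    # ((F(tau, x) - F(theta, x)) / delta) * z estimates the gradient at theta.
    z = rng.standard_normal(theta.shape)
    return ((F(theta + delta * z, x) - F(theta, x)) / delta) * z

def zero_order_sgd(F, sample_x, theta0, k, step=0.1, delta=1e-4, seed=0):
    # Each of the k rounds observes the pair (F(theta^t, X^t), F(tau^t, X^t))
    # and takes a decaying gradient step; returns the averaged iterate theta_hat(k).
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    avg = np.zeros_like(theta)
    for t in range(1, k + 1):
        x = sample_x(rng)
        g = two_point_gradient(F, theta, x, delta, rng)
        theta = theta - (step / np.sqrt(t)) * g
        avg += (theta - avg) / t  # running average of iterates
    return avg

# Toy instance: F(theta; X) = 0.5*(theta - X)^2 with X ~ N(1, 0.1^2),
# so f(theta) = E_P[F(theta; X)] is minimized at theta = 1.
F = lambda th, x: 0.5 * np.sum((th - x) ** 2)
theta_hat = zero_order_sgd(F, lambda rng: 1.0 + 0.1 * rng.standard_normal(), [0.0], 2000)
```

The optimization gap ε_k is then f(theta_hat) − f(1), which shrinks as the number of observed pairs k grows.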
Similar resources
Approximation of Stochastic Parabolic Differential Equations with Two Different Finite Difference Schemes
We focus on the use of two stable and accurate explicit finite difference schemes to approximate the solutions of stochastic partial differential equations of Itô type, in particular parabolic equations. The main properties of these deterministic difference methods, i.e., convergence, consistency, and stability, are developed separately for the stochastic cases.
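As a concrete illustration of this class of schemes (my own sketch, not the paper's specific methods): an explicit forward-Euler / centered-difference discretization of a stochastic heat equation with additive space-time white noise, where the usual stability restriction dt/dx² ≤ 1/2 carries over from the deterministic case.

```python
import numpy as np

def stochastic_heat(n=50, steps=1000, T=0.1, sigma=0.05, seed=0):
    # du = u_xx dt + sigma dW(t, x) on [0, 1] with zero Dirichlet boundaries.
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n, T / steps
    r = dt / dx**2              # explicit scheme is stable only for r <= 1/2
    assert r <= 0.5
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)       # smooth initial condition
    for _ in range(steps):
        lap = u[:-2] - 2.0 * u[1:-1] + u[2:]       # centered second difference
        # space-time white noise increment: variance dt/dx per grid cell
        noise = sigma * np.sqrt(dt / dx) * rng.standard_normal(n - 1)
        u[1:-1] += r * lap + noise                 # explicit Ito-Euler step
        u[0] = u[-1] = 0.0
    return u

u = stochastic_heat()
```

With these parameters r = 0.25, so the iteration stays bounded; violating r ≤ 1/2 makes the deterministic part of the update blow up just as in the non-stochastic heat equation.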
A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets
We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. Numerical experiments indicate that the new algorithm...
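The "memory of previous gradient values" idea can be sketched as follows (a minimal SAG-style illustration on a toy quadratic sum; the function `sag`, its parameters, and the test problem are my own assumptions, not the paper's exact algorithm):

```python
import numpy as np

def sag(grad_i, n, theta0, steps, lr, seed=0):
    # Keep the most recently seen gradient of each of the n summands and
    # step along the average of all stored gradients (SAG-style update).
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    memory = np.zeros((n,) + theta.shape)   # one stored gradient per summand
    total = np.zeros_like(theta)            # running sum of stored gradients
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(i, theta)
        total += g - memory[i]              # swap the stale gradient for a fresh one
        memory[i] = g
        theta = theta - lr * total / n
    return theta

# Toy instance: minimize (1/n) * sum_i 0.5*(theta - a_i)^2; the optimum is mean(a) = 2.5
a = np.array([1.0, 2.0, 3.0, 4.0])
theta = sag(lambda i, th: th - a[i], len(a), [0.0], 5000, 0.1)
```

Unlike plain stochastic gradient descent, each step uses one fresh gradient plus n − 1 remembered ones, which removes the variance floor that forces sublinear rates for standard stochastic gradient methods on finite sums.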
Approximation of Stochastic Advection Diffusion Equations with Finite Difference Scheme
In this paper, a high-order and conditionally stable stochastic difference scheme is proposed for the numerical solution of the Itô stochastic advection-diffusion equation with a one-dimensional white noise process. We applied a fourth-order finite difference approximation for discretizing the spatial derivative of this equation. The main properties of deterministic difference schemes,...
Nonlinear Guidance Law with Finite Time Convergence Considering Control Loop Dynamics
In this paper a new nonlinear guidance law with finite-time convergence is proposed. The second-order integrated guidance and control loop is formulated, taking a first-order control loop dynamics into account. By transforming the state equations to normal form, a finite-time stabilizing feedback linearization technique is proposed to guarantee finite-time convergence of the system states to zero...
Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods
In this paper, we reformulate stochastic generalized second-order cone complementarity problems as box-constrained optimization problems. If the objective value of the reformulation is zero, then solutions of the box-constrained optimization problems are also solutions of the stochastic generalized second-order cone complementarity problems. Since the box-constrained minimization...